On detection probabilities of link invariants
Kelomäki, Tuomas, Lacabanne, Abel, Tubbenhauer, Daniel, Vaz, Pedro, Zhang, Victor L.
We prove that the detection rate of n-crossing alternating links by many standard link invariants decays exponentially in n, implying that they detect alternating links with probability zero. This phenomenon applies broadly, in particular to the Jones and HOMFLYPT polynomials and integral Khovanov homology. We also use a big-data approach to analyze knots and provide evidence that, for knots as well, these invariants exhibit the same asymptotic failure of detection.
The memory process Γ(t) is an ergodic Markov chain on S. We emphasize that S is simply the set of "feasible states". We next show that Γ(t) on S is an ergodic chain. Recall that G is a connected graph and that the communication protocol is proper. Therefore, by the properness of the communication protocol, for any n, m there is a positive probability that a message from player m, containing the reward of player m (which is always known to it), reaches player n. We conclude that Γ(t) on S is an ergodic Markov chain by definition.
Quantum Inspired Encoding Strategies for Machine Learning Models: Proposing and Evaluating Instance Level, Global Discrete, and Class Conditional Representations
In this study, we propose, evaluate, and compare three quantum-inspired data encoding strategies, the Instance Level Strategy (ILS), the Global Discrete Strategy (GDS), and the Class Conditional Value Strategy (CCVS), for transforming classical data into quantum data for use in purely classical machine learning models. The primary objective is to reduce the high encoding time while ensuring correct encoding values and analyzing their impact on classification performance. The Instance Level Strategy treats each row of the dataset independently, mimicking local quantum states. The Global Discrete Strategy maps all unique feature values across the full dataset to quantum states uniformly. In contrast, the Class Conditional Value Strategy encodes unique values separately for each class, preserving class-dependent information. We apply these encoding strategies to a classification task and assess their impact on encoding efficiency, correctness, model accuracy, and computational cost. By analyzing the trade-offs between encoding time, precision, and predictive performance, this study provides insights into optimizing quantum-inspired data transformations for classical machine learning workflows.
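As a concrete, simplified illustration of the three strategies, the sketch below uses plain NumPy: ILS normalizes each row like a local state vector, GDS maps the dataset's unique values uniformly onto [0, 1], and CCVS applies that mapping per class. The function names and the uniform [0, 1] mapping are our assumptions for illustration, not the paper's exact encodings.

```python
import numpy as np

def instance_level(X):
    """ILS (sketch): normalize each row to unit L2 norm, mimicking a local quantum state."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # leave all-zero rows unchanged
    return X / norms

def global_discrete(X):
    """GDS (sketch): map every unique feature value in the dataset uniformly onto [0, 1]."""
    uniq = np.unique(X)
    lut = {v: i / max(len(uniq) - 1, 1) for i, v in enumerate(uniq)}
    return np.vectorize(lut.get)(X)

def class_conditional(X, y):
    """CCVS (sketch): apply the GDS-style mapping separately within each class."""
    out = np.empty_like(X, dtype=float)
    for c in np.unique(y):
        mask = (y == c)
        out[mask] = global_discrete(X[mask])
    return out
```

Because GDS builds one lookup table for the whole dataset while CCVS builds one per class, the same raw value can receive different encodings in different classes, which is exactly the class-dependent information CCVS is meant to preserve.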
Customizing generative AI for unique value
Since the emergence of enterprise-grade generative AI, organizations have tapped into the rich capabilities of foundational models, developed by the likes of OpenAI, Google DeepMind, Mistral, and others. Over time, however, businesses often found these models limiting since they were trained on vast troves of public data. Enter customization: the practice of adapting large language models (LLMs) to better suit a business's specific needs by incorporating its own data and expertise, teaching a model new skills or tasks, or optimizing prompts and data retrieval. Customization is not new, but the early tools were fairly rudimentary, and technology and development teams were often unsure how to do it. That's changing, and the customization methods and tools available today are giving businesses greater opportunities to create unique value from their AI models.
Temporal Fair Division of Indivisible Items
Elkind, Edith, Lam, Alexander, Latifian, Mohamad, Neoh, Tzeh Yuan, Teh, Nicholas
We study a fair division model where indivisible items arrive sequentially, and must be allocated immediately and irrevocably. Previous work on online fair division has shown impossibility results in achieving approximate envy-freeness under these constraints. In contrast, we consider an informed setting where the algorithm has complete knowledge of future items, and aim to ensure that the cumulative allocation at each round satisfies approximate envy-freeness -- which we define as temporal envy-freeness up to one item (TEF1). We focus on settings where items can be exclusively goods or exclusively chores. For goods, while TEF1 allocations may not always exist, we identify several special cases where they do -- two agents, two item types, generalized binary valuations, unimodal preferences -- and provide polynomial-time algorithms for these cases. We also prove that determining the existence of a TEF1 allocation is NP-hard. For chores, we establish analogous results for the special cases, but present a slightly weaker intractability result. We also establish the incompatibility between TEF1 and Pareto-optimality, with the implication that it is intractable to find a TEF1 allocation that maximizes any $p$-mean welfare, even for two agents.
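For goods, the TEF1 condition can be checked directly from its definition: after every arrival, the cumulative allocation must be envy-free up to one good. The sketch below is our own illustrative code, assuming additive valuations; it is a checker, not one of the paper's allocation algorithms.

```python
def is_ef1_goods(bundles, valuations):
    """EF1 for one snapshot: each agent i values its own bundle at least as much
    as any other bundle minus that bundle's (for i) most valuable good.
    bundles[i] lists item indices held by agent i; valuations[i][g] >= 0."""
    n = len(bundles)
    for i in range(n):
        vi_own = sum(valuations[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            vi_j = sum(valuations[i][g] for g in bundles[j])
            best = max((valuations[i][g] for g in bundles[j]), default=0)
            if vi_own < vi_j - best:
                return False
    return True

def is_tef1(assignment, valuations, n_agents):
    """TEF1: the cumulative allocation must be EF1 after every round.
    assignment[t] is the agent receiving item t (items arrive in order)."""
    bundles = [[] for _ in range(n_agents)]
    for t, agent in enumerate(assignment):
        bundles[agent].append(t)
        if not is_ef1_goods(bundles, valuations):
            return False
    return True
```

For two agents with identical unit valuations, alternating the items round-robin passes this check, while giving the first two items to the same agent already violates EF1 at round two.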
Value-Compressed Sparse Column (VCSC): Sparse Matrix Storage for Redundant Data
Ruiter, Skyler, Wolfgang, Seth, Tunnell, Marc, Triche, Timothy Jr., Carrier, Erin, DeBruine, Zachary
Compressed Sparse Column (CSC) and Coordinate (COO) are popular compression formats for sparse matrices. However, both CSC and COO are general purpose and cannot take advantage of any of the properties of the data other than sparsity, such as data redundancy. Highly redundant sparse data is common in many machine learning applications, such as genomics, and is often too large for in-core computation using conventional sparse storage formats. In this paper, we present two extensions to CSC: (1) Value-Compressed Sparse Column (VCSC) and (2) Index- and Value-Compressed Sparse Column (IVCSC). VCSC takes advantage of high redundancy within a column to further compress data up to 3-fold over COO and 2.25-fold over CSC, without significant negative impact on performance characteristics. IVCSC extends VCSC by compressing index arrays through delta encoding and byte-packing, achieving a 10-fold decrease in memory usage over COO and 7.5-fold decrease over CSC. Our benchmarks on simulated and real data show that VCSC and IVCSC can be read in compressed form with little added computational cost. These two novel compression formats offer a broadly useful solution to encoding and reading redundant sparse data.
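The value-compression idea can be modeled in a few lines: within each column, each distinct value is stored once, together with its occurrence count and the row indices where it appears. The code below is a hypothetical Python model of that layout, not the authors' implementation.

```python
import numpy as np

def vcsc_encode_column(values, rows):
    """Group a column's parallel (value, row) arrays by value: each distinct
    value is stored once, with its count and the rows where it occurs."""
    order = np.argsort(values, kind="stable")
    v, r = values[order], rows[order]
    # v is sorted, so first-occurrence indices mark the start of each group
    uniq, starts, counts = np.unique(v, return_index=True, return_counts=True)
    groups = [r[s:s + c] for s, c in zip(starts, counts)]
    return uniq, counts, groups

def vcsc_decode_column(uniq, counts, groups):
    """Reconstruct the column's parallel (values, rows) arrays."""
    return np.repeat(uniq, counts), np.concatenate(groups)
```

For a highly redundant column (say five nonzeros but only two distinct values), the encoder stores two values and two counts instead of five values, which is where the savings over plain CSC come from.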
Using Decision Trees for Interpretable Supervised Clustering
Kokash, Natallia, Makhnist, Leonid
In this paper, we address the problem of finding explainable clusters of class-uniform data in labelled datasets. The problem falls into the domain of interpretable supervised clustering. Unlike traditional clustering, supervised clustering aims at forming clusters of labelled data with high probability densities. We are particularly interested in finding clusters of data of a given class and describing the clusters with a set of comprehensive rules. We propose an iterative method to extract high-density clusters with the help of decision-tree-based classifiers as the most intuitive learning method, and discuss the method of node selection to maximize the quality of the identified groups.
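The iterative extraction can be sketched without any ML library: repeatedly pick the axis-aligned split whose selected side is densest in the target class, add it to the rule list, and restrict the region until a purity threshold is reached. Everything below (function names, the purity score, the stopping rule) is our simplified stand-in for the paper's decision-tree-based method.

```python
def best_split(points, labels, target):
    """Return (purity, (feature, threshold, side)) for the split whose side
    has the highest fraction of target-class points, or (0.0, None)."""
    best = (0.0, None)
    n_features = len(points[0])
    for f in range(n_features):
        for thr in sorted({p[f] for p in points}):
            for side in ("<=", ">"):
                sel = [l for p, l in zip(points, labels)
                       if (p[f] <= thr) == (side == "<=")]
                if len(sel) < 2:  # ignore trivial one-point regions
                    continue
                purity = sel.count(target) / len(sel)
                if purity > best[0]:
                    best = (purity, (f, thr, side))
    return best

def extract_cluster(points, labels, target, min_purity=0.9, max_rules=3):
    """Iteratively refine a conjunction of rules describing one high-density
    cluster of the target class; returns (rules, member indices)."""
    rules, region = [], list(range(len(points)))
    for _ in range(max_rules):
        pts = [points[i] for i in region]
        lbs = [labels[i] for i in region]
        purity, rule = best_split(pts, lbs, target)
        if rule is None:
            break
        f, thr, side = rule
        rules.append(rule)
        region = [i for i in region
                  if (points[i][f] <= thr) == (side == "<=")]
        if purity >= min_purity:
            break
    return rules, region
```

The returned rule list, e.g. `[(0, 2, ">")]` for "feature 0 greater than 2", is exactly the kind of comprehensible cluster description the paper is after.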
Common Subexpression-based Compression and Multiplication of Sparse Constant Matrices
In deep learning inference, model parameters are pruned and quantized to reduce the model size. Compression methods and common subexpression (CSE) elimination algorithms are applied to sparse constant matrices to deploy the models on low-cost embedded devices. However, the state-of-the-art CSE elimination methods do not scale well to large matrices: they take hours to extract CSEs from a $200 \times 200$ matrix, while their matrix multiplication algorithms execute longer than the conventional matrix multiplication methods. Moreover, no existing compression method for matrices utilizes CSEs. As a remedy, this paper proposes a random search-based algorithm to extract CSEs from the column pairs of a constant matrix; it produces an adder tree for a $1000 \times 1000$ matrix in a minute. To compress the adder tree, this paper presents a compression format that extends Compressed Sparse Row (CSR) to include CSEs. Compression rates of more than $50\%$ are achieved compared to the original CSR format, and simulations for a single-core embedded system show that the matrix multiplication execution time can be reduced by $20\%$.
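The column-pair idea can be illustrated on a 0/1 matrix: if two columns co-occur in many rows, the sum x[a] + x[b] is a common subexpression that can be computed once and reused. The sketch below is our illustrative code for this general idea; the paper's random search and adder-tree construction are more elaborate.

```python
from collections import Counter
from itertools import combinations

def extract_pair_cse(A):
    """For a 0/1 constant matrix, find the column pair co-occurring in the most
    rows; computing s = x[a] + x[b] once saves (count - 1) additions."""
    counts = Counter()
    for row in A:
        support = [j for j, v in enumerate(row) if v]
        for pair in combinations(support, 2):
            counts[pair] += 1
    if not counts:
        return None, 0
    return counts.most_common(1)[0]

def multiply_with_one_cse(A, x):
    """Evaluate y = A x, reusing the most frequent column-pair sum."""
    pair, _ = extract_pair_cse(A)
    y = []
    for row in A:
        support = [j for j, v in enumerate(row) if v]
        total = 0
        if pair and pair[0] in support and pair[1] in support:
            total += x[pair[0]] + x[pair[1]]  # shared subexpression
            support = [j for j in support if j not in pair]
        for j in support:
            total += x[j]
        y.append(total)
    return y
```

A full method would apply this greedily, introducing a virtual column for each extracted pair and repeating until no pair occurs more than once, which yields the adder tree mentioned above.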